Gh 2179 transformer pooling #2180

Merged
merged 5 commits into master from GH-2179-transformer-ppoling on Mar 24, 2021
Conversation

kishaloyhalder
Collaborator

Added 'mean' and 'max' pooling strategies to the TransformerDocumentEmbeddings class.
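
A minimal usage sketch (the `pooling` argument name and the 'mean'/'max'/'cls' values are taken from the verification script in the comment below):

from flair.data import Sentence
from flair.embeddings.document import TransformerDocumentEmbeddings

# pooling='mean' averages the transformer's token embeddings, pooling='max' takes
# their element-wise maximum, and pooling='cls' uses the [CLS] token embedding
document_embeddings = TransformerDocumentEmbeddings(model="bert-base-uncased", pooling="mean")

sentence = Sentence("Good Morning")
document_embeddings.embed(sentence)
print(sentence.get_embedding().shape)  # one fixed-size vector for the whole sentence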

@kishaloyhalder
Collaborator Author

Verified correctness against sentence-transformers' implementation of the same pooling strategies:

from flair.embeddings.document import TransformerDocumentEmbeddings
from flair.data import Sentence
from sentence_transformers import SentenceTransformer, models
import torch

text = "Good Morning"
document_embeddings = TransformerDocumentEmbeddings(model="bert-base-uncased", pooling="mean")

# option 1: mean pooling (set in the constructor)
sentence = Sentence(text, use_tokenizer=False)
document_embeddings.embed(sentence)
embedding_1 = sentence.get_embedding()
print(embedding_1.shape)
print(embedding_1[:20])

# option 2: switch to max pooling
document_embeddings.pooling = 'max'
sentence = Sentence(text, use_tokenizer=False)
document_embeddings.embed(sentence)
embedding_2 = sentence.get_embedding()
print(embedding_2.shape)
print(embedding_2[:20])

# option 3: switch to CLS-token pooling
document_embeddings.pooling = 'cls'
sentence = Sentence(text, use_tokenizer=False)
document_embeddings.embed(sentence)
embedding_3 = sentence.get_embedding()
print(embedding_3.shape)
print(embedding_3[:20])


word_embedding_model = models.Transformer('bert-base-uncased', max_seq_length=512)
pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())

model = SentenceTransformer(modules=[word_embedding_model, pooling_model])

# option 1': sentence-transformers default pooling (mean)
embedding_11 = model.encode(text, convert_to_tensor=True)
print(embedding_11.shape)
print(embedding_11[:20])

# option 2': max pooling
pooling_model.pooling_mode_cls_token = False
pooling_model.pooling_mode_mean_tokens = False
pooling_model.pooling_mode_max_tokens = True
embedding_21 = model.encode(text, convert_to_tensor=True)
print(embedding_21.shape)
print(embedding_21[:20])

# option 3': CLS-token pooling
pooling_model.pooling_mode_cls_token = True
pooling_model.pooling_mode_mean_tokens = False
pooling_model.pooling_mode_max_tokens = False
embedding_31 = model.encode(text, convert_to_tensor=True)
print(embedding_31.shape)
print(embedding_31[:20])

# for each pooling strategy, the flair embedding should exactly match the
# corresponding sentence-transformers embedding
assert torch.all(torch.eq(embedding_1, embedding_11))
assert torch.all(torch.eq(embedding_2, embedding_21))
assert torch.all(torch.eq(embedding_3, embedding_31))

@alanakbik
Collaborator

@kishaloyhalder thanks a lot for adding this!

alanakbik merged commit 4b1bf17 into master on Mar 24, 2021
alanakbik deleted the GH-2179-transformer-ppoling branch on Mar 24, 2021, 20:12